Testing the relation between Theory-of-Mind network activation and dispositional anthropomorphism
by Ruud Hortensius and Michaela Kent (University of Glasgow) - June 2019 - …
Data for the Theory-of-Mind functional localiser and the Individual Differences in Anthropomorphism Questionnaire come from five different studies.
Dataset_1: Bangor Imaging Unit; EMBOTS; n=29 (including 1 pilot scan); full dataset and publication: Cross…Hortensius (2019) PTRB.
Dataset_2: Centre for Cognitive NeuroImaging; SHAREDBOTS; n=35 (including 2 pilot scans); publication: Hortensius & Cross, in preparation.
Dataset_3: Centre for Cognitive NeuroImaging; Two studies with the same parameters: n=22 (including 2 pilot scans). Social_Gradient_1; n=10 (pilot experiment) and BOLDlight; n=12.
Dataset_4: Centre for Cognitive NeuroImaging; GAMEBOTS; n=22.
Get info for table S1:
library("tidyverse")
#load own data
DF.dataset1 <- read_tsv(file = "/Volumes/Project0255/dataset_1/participants.tsv")
DF.dataset2 <- read_tsv(file = "/Volumes/Project0255/dataset_2/participants.tsv")
DF.dataset3 <- read_tsv(file = "/Volumes/Project0255/dataset_3/participants.tsv")
DF.dataset4 <- read_tsv(file = "/Volumes/Project0255/dataset_4/participants.tsv")
#combine data
bind_rows(DF.dataset1, DF.dataset2, DF.dataset3, DF.dataset4, .id = "dataset") %>%
group_by(dataset) %>%
summarise(mean = mean(age),
sd = sd(age))
bind_rows(DF.dataset1, DF.dataset2, DF.dataset3, DF.dataset4, .id = "dataset") %>%
group_by(dataset, sex) %>%
tally()
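For a language-agnostic cross-check of the table S1 numbers, the same per-dataset summaries can be sketched in plain Python. The rows below are hypothetical; the real values come from the four participants.tsv files:

```python
import statistics

# Hypothetical participant records; in practice these are read from the
# participants.tsv files (dataset id, sex, age).
participants = [
    {"dataset": "1", "sex": "F", "age": 23},
    {"dataset": "1", "sex": "M", "age": 27},
    {"dataset": "2", "sex": "F", "age": 21},
    {"dataset": "2", "sex": "F", "age": 25},
]

def summarise_age(rows):
    """Mean and SD of age per dataset (mirrors group_by + summarise)."""
    out = {}
    for ds in sorted({r["dataset"] for r in rows}):
        ages = [r["age"] for r in rows if r["dataset"] == ds]
        out[ds] = (statistics.mean(ages), statistics.stdev(ages))
    return out

def tally_sex(rows):
    """Counts per dataset x sex combination (mirrors group_by + tally)."""
    out = {}
    for r in rows:
        key = (r["dataset"], r["sex"])
        out[key] = out.get(key, 0) + 1
    return out

print(summarise_age(participants))
print(tally_sex(participants))
```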
All participants completed a Theory-of-Mind localiser (Jacoby et al., 2016; Richardson et al., 2018) and an anatomical scan, either in the same session or in two separate sessions. During the localiser, participants passively viewed a short 5.6 min animated film (Partly Cloudy). This movie includes scenes depicting pain (e.g. an alligator biting the main character) and events that trigger mentalizing (e.g. the main character revealing its intentions). For dataset_3 and dataset_4 a fieldmap was collected as well. At the end of each experiment participants completed the Individual Differences in Anthropomorphism Questionnaire (IDAQ) (Waytz et al., 2010).
BOLD:
Dataset_1: 3x3x3.5mm voxels, 32 slices, repetition time = 2s, echo time = 30ms
Dataset_2: 3mm isotropic, 37 slices, TR = 2s, TE = 30ms
Dataset_3: 2mm isotropic, 68 slices, TR = 2s, TE = 26ms
Dataset_4: 2.75 x 2.75 x 4mm, 32 slices, TR = 2s, TE = 13 and 31ms
T1W:
Dataset_1: 1mm isotropic resolution, TR = 12ms, TE = 3.47 / 5.15 / 6.83 / 8.52 / 10.20ms (SENSE)
Dataset_2 - 4: 1mm isotropic resolution, TR = 2.3s, TE = 29.6ms (ADNI)
Fieldmaps:
Dataset_1: no, so --use-syn-sdc
Dataset_2: no, so --use-syn-sdc
Dataset_3: yes
Dataset_4: yes
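The fieldmap availability above determines the susceptibility-distortion-correction strategy per dataset; as a tiny self-check, the mapping can be encoded directly (the flag is fMRIPrep's --use-syn-sdc, used when no acquired fieldmap exists):

```python
# Which datasets have acquired fieldmaps, and which fall back to
# fMRIPrep's fieldmap-less SyN-based correction (--use-syn-sdc).
HAS_FIELDMAP = {"dataset_1": False, "dataset_2": False,
                "dataset_3": True, "dataset_4": True}

def sdc_flag(dataset):
    """Return the extra fMRIPrep flag needed for this dataset, if any."""
    return "" if HAS_FIELDMAP[dataset] else "--use-syn-sdc"

for ds in sorted(HAS_FIELDMAP):
    print(ds, sdc_flag(ds) or "(fieldmap-based SDC)")
```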
Note: each code chunk lists its language, but all chunks except the R chunks are executed in the terminal.
For this you need HeuDiConv (Heuristic DICOM Converter).
Based on the tutorial by Franklin Feingold.
Download the latest version of HeuDiConv (we used 0.6.0.dev1):
docker pull nipy/heudiconv:latest
If on the GRID do:
singularity pull docker://nipy/heudiconv:latest
Create the info file (dataset_2 - 4):
docker run --rm -it -v /Volumes/Project0255/:/base nipy/heudiconv:latest -d /base/dataset_3/sourcedata/sub-{subject}/*.IMA -o /base/dataset_3 -f convertall -s 315 -c none --overwrite
For dataset_1 we first need to convert the .dcm from jpeg-2000 lossless to uncompressed dcm (thanks to Michele Svanera for the code):
python3 convert_all_compressed_dicom.py
Create the info file (dataset_1):
docker run --rm -it -v /Volumes/Project0255/:/base nipy/heudiconv:latest -d /base/dataset_1/sourcedata/sub-{subject}/ses-{session}/*.dcm -o /base/dataset_1 -f convertall -s 129 -ss 01 -c none --overwrite
Get the info file:
cp /Volumes/Project0255/code/.heudiconv/301/info/dicominfo.tsv /Volumes/Project0255/code
Create the following Python file and save it in the code folder. There is one functional task (func_movie) and one anatomical scan (t1w). Dataset_3 and 4 also have a fieldmap (fmap_phase and fmap_magnitude).
Create a heuristic to automatically convert the files:
import os

def create_key(template, outtype=('nii.gz',), annotation_classes=None):
    if template is None or not template:
        raise ValueError('Template must be a valid format string')
    return template, outtype, annotation_classes

def infotodict(seqinfo):
    """Heuristic evaluator for determining which runs belong where

    allowed template fields - follow python string module:
    item: index within category
    subject: participant id
    seqitem: run number during scanning
    subindex: sub index within group
    session: session id (only for dataset_1)
    """
    t1w1 = create_key('sub-{subject}/{session}/anat/sub-{subject}_{session}_T1w')
    func_movie1 = create_key('sub-{subject}/{session}/func/sub-{subject}_{session}_task-movie_bold')
    t1w = create_key('sub-{subject}/anat/sub-{subject}_T1w')
    func_movie = create_key('sub-{subject}/func/sub-{subject}_task-movie_bold')
    func_movie_echo_1 = create_key('sub-{subject}/func/sub-{subject}_task-movie_echo-1_bold')
    func_movie_echo_2 = create_key('sub-{subject}/func/sub-{subject}_task-movie_echo-2_bold')
    fmap_phase = create_key('sub-{subject}/fmap/sub-{subject}_phasediff')
    fmap_magnitude = create_key('sub-{subject}/fmap/sub-{subject}_magnitude')
    info = {t1w1: [], func_movie1: [], t1w: [], func_movie: [],
            fmap_phase: [], fmap_magnitude: [],
            func_movie_echo_1: [], func_movie_echo_2: []}
    for idx, s in enumerate(seqinfo):
        if ('T1W_1mm_sag SENSE' in s.protocol_name):
            info[t1w1].append(s.series_id)
        if ('ToM_PartlyCloudy SENSE' in s.protocol_name):
            info[func_movie1].append(s.series_id)
        if ('t1_mpr_ns_sag_iso_ADNI_32ch' in s.protocol_name):
            info[t1w].append(s.series_id)
        if ('t1_mpr_ns_sag_P2_ADNI_32ch' in s.protocol_name):
            info[t1w].append(s.series_id)
        if (s.dim4 == 175) and ('FMRI_MB2_p2_2MMISO_TR2_movie' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.dim4 == 175) and ('FMRI_MB2_movie_p2_2MMISO_TR2' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.dim4 == 170) and ('ep2d_ToM_Loc' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.dim4 == 175) and ('ep2d_ToM_Loc' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.dim4 == 175) and ('ep2d_ToM_Loc_boldTR2' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.TE == 13) and ('BP_ep2d_multiecho_32ch_p3_TOM' in s.protocol_name):
            info[func_movie_echo_1].append(s.series_id)
        if (s.TE == 31.36) and ('BP_ep2d_multiecho_32ch_p3_TOM' in s.protocol_name):
            info[func_movie_echo_2].append(s.series_id)
        if (s.dim3 == 92) and ('gre_field_mapping_AAH' in s.protocol_name):
            info[fmap_magnitude].append(s.series_id)
        if (s.dim3 == 46) and ('gre_field_mapping_AAH' in s.protocol_name):
            info[fmap_phase].append(s.series_id)
        if (s.dim3 == 64) and ('gre_field_mapping_AAH' in s.protocol_name):
            info[fmap_magnitude].append(s.series_id)
        if (s.dim3 == 32) and ('gre_field_mapping_AAH' in s.protocol_name):
            info[fmap_phase].append(s.series_id)
    return info
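A quick way to sanity-check a heuristic like this before running HeuDiConv on real data is to feed it mock seqinfo-style records. A minimal sketch (the field names mirror the ones the heuristic inspects; the classify helper is illustrative, not part of HeuDiConv):

```python
from collections import namedtuple

# Minimal stand-in for HeuDiConv's seqinfo records; only the fields
# the heuristic above inspects.
Seq = namedtuple("Seq", "series_id protocol_name dim3 dim4 TE")

def classify(s):
    """Mirror of the heuristic's matching rules for dataset_3-style runs."""
    if "t1_mpr_ns_sag_iso_ADNI_32ch" in s.protocol_name:
        return "t1w"
    if s.dim4 == 175 and "FMRI_MB2_p2_2MMISO_TR2_movie" in s.protocol_name:
        return "func_movie"
    if s.dim3 == 92 and "gre_field_mapping_AAH" in s.protocol_name:
        return "fmap_magnitude"
    if s.dim3 == 46 and "gre_field_mapping_AAH" in s.protocol_name:
        return "fmap_phase"
    return None

# Mock series with plausible (made-up) dimensions:
mock = [
    Seq("2", "t1_mpr_ns_sag_iso_ADNI_32ch", 192, 1, 2.96),
    Seq("3", "FMRI_MB2_p2_2MMISO_TR2_movie", 68, 175, 26.0),
    Seq("4", "gre_field_mapping_AAH", 46, 1, 7.38),
]
print([classify(s) for s in mock])
```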
Use the heuristic file to convert the Dicom files to .nii.gz (nifti) and create .json files:
docker run --rm -it -v /Volumes/Project0255/:/base nipy/heudiconv:latest -d /base/dataset_4/sourcedata/sub-{subject}/*.IMA -o /base/dataset_4 -f /base/code/heuristic_anthrom.py -s 401 -c dcm2niix -b --overwrite
For dataset_1, add ses-{session}/ and -ss 01 and .dcm to the command. The movie for sub-101 and sub-102 is in ses-02:
docker run --rm -it -v /Volumes/Project0255/:/base nipy/heudiconv:latest -d /base/dataset_1/sourcedata/sub-{subject}/ses-{session}/*.dcm -o /base/dataset_1 -f /base/code/heuristic_anthrom.py -s 121 -ss 02 -c dcm2niix -b --overwrite
On the grid (sub-116 was converted manually in dcm2niigui), type bash before running:
Dataset_1:
singularity run -B /analyse/Project0255/:/base /analyse/Project0255/my_images/heudiconv_latest.sif -d /base/dataset_1/sourcedata/sub-{subject}/ses-{session}/*.dcm -o /base/dataset_1/ -f /base/code/heuristic_anthrom.py -s 116 -ss 01 -c dcm2niix -b --overwrite
Dataset_2 - 4:
singularity run -B /analyse/Project0255/:/base /analyse/Project0255/my_images/heudiconv_latest.sif -d /base/dataset_2/sourcedata/sub-{subject}/*.IMA -o /base/dataset_2/ -f /base/code/heuristic_anthrom.py -s 201 -c dcm2niix -b --overwrite
Deface using Pydeface:
#!/bin/bash
set -e
#### For loop that defaces the MRI per subject and replaces the old MRI with the new defaced MRI
rootfolder=/Volumes/Project0255/dataset_4
for subj in 401; do
    echo "Defacing participant $subj"
    pydeface ${rootfolder}/sub-${subj}/anat/sub-${subj}_T1w.nii.gz
    rm -f ${rootfolder}/sub-${subj}/anat/sub-${subj}_T1w.nii.gz
    mv ${rootfolder}/sub-${subj}/anat/sub-${subj}_T1w_defaced.nii.gz ${rootfolder}/sub-${subj}/anat/sub-${subj}_T1w.nii.gz
done
For dataset_1: ses-01: 101 102 103 107 112 113 117 118 119 122 123 124 128 ses-02: 104 105 106 108 109 110 111 115 116 120 121 125 126 127
#!/bin/bash
set -e
rootfolder=/Volumes/Project0255/dataset_1
for subj in 129; do
    echo "Defacing participant $subj"
    for session in 01; do
        for echo in 1 2 3 4 5; do
            pydeface ${rootfolder}/sub-${subj}/ses-${session}/anat/sub-${subj}_ses-${session}_echo-${echo}_T1w.nii.gz
            rm -f ${rootfolder}/sub-${subj}/ses-${session}/anat/sub-${subj}_ses-${session}_echo-${echo}_T1w.nii.gz
            mv ${rootfolder}/sub-${subj}/ses-${session}/anat/sub-${subj}_ses-${session}_echo-${echo}_T1w_defaced.nii.gz ${rootfolder}/sub-${subj}/ses-${session}/anat/sub-${subj}_ses-${session}_echo-${echo}_T1w.nii.gz
        done
    done
done
You need to specify the "IntendedFor" field in the _phasediff.json files to indicate which scans the estimated fieldmap should be applied to.
Run the following script (thanks to Michele Svanera for the code):
python change_json.py
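change_json.py itself is not reproduced here, but the core of such a script is small. A hedged sketch of adding the BIDS "IntendedFor" field to a fieldmap sidecar (file names and paths below are illustrative, not the real dataset paths):

```python
import json
import os
import tempfile

def add_intended_for(json_path, target_files):
    """Insert (or overwrite) the BIDS 'IntendedFor' field in a fieldmap
    sidecar. target_files are paths relative to the subject directory,
    as the BIDS specification requires."""
    with open(json_path) as f:
        sidecar = json.load(f)
    sidecar["IntendedFor"] = target_files
    with open(json_path, "w") as f:
        json.dump(sidecar, f, indent=2)

# Demo on a throwaway file with a made-up sidecar:
tmp = os.path.join(tempfile.mkdtemp(), "sub-301_phasediff.json")
with open(tmp, "w") as f:
    json.dump({"EchoTime1": 0.00492}, f)
add_intended_for(tmp, ["func/sub-301_task-movie_bold.nii.gz"])
with open(tmp) as f:
    print(json.load(f)["IntendedFor"])
```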
For dataset_4 we need to combine the two echoes (see NeuroStars for more info). We created a dual_sum volume by adding the two images together (see Halai et al., 2014).
Run the following script (thanks to Tyler Morgan for the code):
python sum_echo.py
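sum_echo.py is not shown here, but the dual_sum idea is just a voxelwise addition of the two echo images. A toy sketch on flat lists of intensities (a real script would operate on the NIfTI data arrays, e.g. via nibabel):

```python
def dual_sum(echo1, echo2):
    """Voxelwise sum of two echo volumes (here flat lists of intensities)."""
    if len(echo1) != len(echo2):
        raise ValueError("echo volumes must have identical shapes")
    return [a + b for a, b in zip(echo1, echo2)]

# Two tiny made-up 'volumes' (TE = 13 ms and TE = 31.36 ms in dataset_4):
e1 = [100.0, 120.0, 90.0]
e2 = [80.0, 95.0, 70.0]
print(dual_sum(e1, e2))  # → [180.0, 215.0, 160.0]
```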
Create tsv file for functional localiser. Event coding (in s; 10s of fixation before movie starts; accounting for hemodynamic lag) is based on Richardson et al. 2018 - reverse correlation analyses.
Note: For sub-322 the trigger was at the start of the movie (thus create a different tsv, with event - 10s). Check the triggers for dataset_1.
PartlyCloudy <- data.frame(onset = c(86, 98, 120, 176, 238, 252, 300, 70, 92, 106, 136, 194, 210, 228, 262, 312), #create the events (same for every sub)
duration = c(4, 6, 4, 16, 6, 8, 6, 4, 2, 4, 10, 4, 12, 6, 6, 4),
trial_type = c(rep("mental",7), rep("pain",9)))
#dataset_1
for (sub in 102:129){ #note: localisers for sub-101 are in ses-02
filename = paste("/Volumes/Project0255/dataset_1/sub-", sub, "/ses-01/func/sub-", sub, "_ses-01_task-movie_events.tsv", sep ="")
write.table(PartlyCloudy, file = filename, sep="\t", row.names = FALSE, quote = FALSE)
}
#dataset_2
for (sub in 201:235){
filename = paste("/Volumes/Project0255/dataset_2/sub-", sub, "/func/sub-", sub, "_task-movie_events.tsv", sep ="")
write.table(PartlyCloudy, file = filename, sep="\t", row.names = FALSE, quote = FALSE)
}
#dataset_3
for (sub in 301:322){ #note: localisers for sub-322 should have t-10 (no trigger) <-manually correct this
filename = paste("/Volumes/Project0255/dataset_3/sub-", sub, "/func/sub-", sub, "_task-movie_events.tsv", sep ="")
write.table(PartlyCloudy, file = filename, sep="\t", row.names = FALSE, quote = FALSE)
}
#dataset_4
for (sub in 401:422){
filename = paste("/Volumes/Project0255/dataset_4/sub-", sub, "/func/sub-", sub, "_task-movie_events.tsv", sep ="")
write.table(PartlyCloudy, file = filename, sep="\t", row.names = FALSE, quote = FALSE)
}
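Before writing the events files everywhere, it can be worth validating the hard-coded onsets and durations. A small stand-alone check of the table defined above (same numbers as the R chunk; the check itself is ours, not part of the pipeline):

```python
# Event table from the R chunk above: 7 'mental' and 9 'pain' events.
onsets = [86, 98, 120, 176, 238, 252, 300,
          70, 92, 106, 136, 194, 210, 228, 262, 312]
durations = [4, 6, 4, 16, 6, 8, 6,
             4, 2, 4, 10, 4, 12, 6, 6, 4]
trial_type = ["mental"] * 7 + ["pain"] * 9

def check_events(onsets, durations, trial_type):
    """Basic sanity checks: equal lengths, positive durations, no overlap."""
    assert len(onsets) == len(durations) == len(trial_type)
    assert all(d > 0 for d in durations)
    events = sorted(zip(onsets, durations))
    for (o1, d1), (o2, _) in zip(events, events[1:]):
        assert o1 + d1 <= o2, f"events overlap at onset {o1}"
    return len(events)

print(check_events(onsets, durations, trial_type))  # → 16
```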
Use the BIDS-Validator to check if the dataset is BIDS compliant:
docker run -ti --rm -v /Volumes/Project0255/dataset_4:/data:ro bids/validator /data
MRIQC is a docker tool to do quality control of the data. More info here.
MRIQC 0.14.2 was used:
docker run -it --rm -v /Volumes/Project0255/dataset_1/:/data:ro -v /Volumes/Project0255/dataset_1/derivatives/mriqc:/out poldracklab/mriqc:0.14.2 /data /out participant --participant-label 101 -m T1w bold --ica --fft-spikes-detector
On the grid do (cd in /analyse folder):
singularity run --cleanenv /analyse/Project0255/my_images/mriqc-0.14.2.simg /analyse/Project0255/dataset_1 /analyse/Project0255/dataset_1/derivatives/mriqc participant --participant-label 123 -m T1w bold --ica --fft-spikes-detector -w /analyse/Project0255/work
Run it separately for the datasets. Change participant to group to create the group reports:
docker run -it --rm -v /Volumes/Project0255/dataset_4/:/data:ro -v /Volumes/Project0255/dataset_4/derivatives/mriqc:/out poldracklab/mriqc:0.14.2 /data /out group
Plot the output. This is based on MRIQCeption; we adapted the MRIQCeption visualization by Catherine Walsh. Adjust the filter if you want to look at different measures.
Adjust this to your liking (e.g. bold: fd_mean, fd_perc, dvars_std, dvars_vstd, gcor, tsnr, t1w: cjv, cnr, snr, efc, inu, wm2max, fwhm) and modality (bold or t1w):
QCmeasure <- "fd_mean"
modality <- "bold"
Run the following code. Change the script below to load the group results for the different datasets:
#libraries
library("tidyverse")
source("/Volumes/Project0255/code/R_rainclouds.R")
#load own data
DF.dataset1 <- read_tsv(file = paste("/Volumes/Project0255/dataset_1/derivatives/mriqc/group_", modality, ".tsv", sep ="")) %>%
gather("measure", "value", 2:46) %>%
select("bids_name","measure", "value")
DF.dataset2 <- read_tsv(file = paste("/Volumes/Project0255/dataset_2/derivatives/mriqc/group_", modality, ".tsv", sep ="")) %>%
gather("measure", "value", 2:46) %>%
select("bids_name","measure", "value")
DF.dataset3 <- read_tsv(file = paste("/Volumes/Project0255/dataset_3/derivatives/mriqc/group_", modality, ".tsv", sep ="")) %>%
gather("measure", "value", 2:46) %>%
select("bids_name","measure", "value")
DF.dataset4 <- read_tsv(file = paste("/Volumes/Project0255/dataset_4/derivatives/mriqc/group_", modality, ".tsv", sep ="")) %>%
gather("measure", "value", 2:46) %>%
select("bids_name","measure", "value")
#select the most relevant measures
#selectionMeasure <- c("snr", "tsnr", "efc", "fber", "gsr_x", "gsr_y", "dvars_nstd", "dvars_std", "dvars_vstd", "gcor", "fd_mean", "fd_number", "fd_percentage", "spikes", "aor", "aqi")
#combine data
DF.full <- bind_rows(DF.dataset1, DF.dataset2, DF.dataset3, DF.dataset4, .id = "dataset") %>%
group_by(dataset) %>%
filter(measure == QCmeasure) #%in% c(selectionMeasure))
#create raincloud plot (check out the [github](https://github.com/RainCloudPlots/) or [preprint](https://wellcomeopenresearch.org/articles/4-63/v1))
p <- ggplot(DF.full,aes(x=dataset,y=value,fill=dataset))+
geom_flat_violin(position=position_nudge(x = .2, y = 0),adjust =2, trim = FALSE, alpha = .5, colour = NA)+
geom_point(aes(colour = dataset), position=position_jitter(width = .05), size = .5, shape = 20)+
geom_boxplot(aes(x=dataset,y=value),position=position_nudge(x = .1, y = 0),outlier.shape = NA, alpha = .5, width = .1, colour = "black")+
#facet_wrap(. ~ dataset) +
theme_classic() + ylab(QCmeasure) + scale_fill_brewer(palette = "Reds") +
scale_colour_brewer(palette = "Reds") + ggtitle(paste("Comparison of", modality, "QC measure", QCmeasure, "between datasets")) +
facet_wrap(~measure)
p
fMRIprep is a docker tool for preprocessing of the fMRI data. More info here
fMRIprep version 1.5.2 was used on a local iMac.
If you run into memory problems you can use --skip_bids_validation; we skipped the --write-graph flag to save space, and used --use-syn-sdc only for dataset_1 and dataset_2.
If run on the GRID, cd into the analyse folder and run:
singularity run --cleanenv /analyse/Project0255/my_images/fmriprep-1.5.2.simg /analyse/Project0255/dataset_1/ /analyse/Project0255/dataset_1/derivatives participant --participant-label sub-129 --fs-license-file /analyse/Project0255/my_images/license.txt --skip_bids_validation --use-syn-sdc --fs-no-reconall -w /analyse/Project0255/work/compute00
Resize functional files for two participants (sub-117 and sub-125) from dataset_1 (sub-{sub}_ses-01_task-movie_space-MNI152NLin2009cAsym_desc-preproc_bold.nii) to allow for group comparison (run this in MATLAB):
voxsiz = [3 3 3.5]; % new voxel size {mm}
V = spm_select([1 Inf],'image');
V = spm_vol(V);
for i=1:numel(V)
bb = spm_get_bbox(V(i));
VV(1:2) = V(i);
VV(1).mat = spm_matrix([bb(1,:) 0 0 0 voxsiz])*spm_matrix([-1 -1 -1]);
VV(1).dim = ceil(VV(1).mat \ [bb(2,:) 1]' - 0.1)';
VV(1).dim = VV(1).dim(1:3);
spm_reslice(VV,struct('mean',false,'which',1,'interp',0)); % 1 for linear
end
Example MATLAB script (dataset 3):
%========================================================================
% SPM first-level analysis for fmriprep data in BIDS format
%========================================================================
% This script is written by Ruud Hortensius and Michaela Kent
% (University of Glasgow). Based upon a script written by
% Shengdong Chen (ACRLAB) and Stephan Heunis (TU Eindhoven).
%
% Added: loop for runs
% Parameters as specified by Saxelab: https://saxelab.mit.edu/theory-mind-and-pain-matrix-localizer-movie-viewing-experiment
%
% Last updated: January 2020
%========================================================================
clear all
%% Inputdirs
BIDS = spm_BIDS('/Volumes/Project0255/dataset_3/'); % Parse BIDS directory (easier to query info from dataset)
BIDSpreproc=fullfile(BIDS.dir,'derivatives/fmriprep'); % get the preprocessed directory
%sublist = spm_BIDS(BIDS,'subjects') %number of subjects
sublist = transpose(BIDS.participants.participant_id) %get subject list including the 'sub'
subex = []
sublist(subex) = []; %update the subjects
taskid='movie'; %specify the task to be analysed
numScans=175; %The number of volumes per run <---
TR = 2; % Repetition time, in seconds <---
unit='secs'; % onset times in secs (seconds) or scans (TRs)
%% Outputdirs
outputdir=fullfile(BIDS.dir,'derivatives/bids_spm/first_level'); % root outputdir for sublist
spm_mkdir(outputdir,char(sublist), char(taskid)); % create output directory
%% Loop for sublist
spm('Defaults','fMRI'); %Initialise SPM fmri
spm_jobman('initcfg'); %Initialise SPM batch mode
for i=1:length(sublist)
%% Output dirs where you save SPM.mat
subdir=fullfile(outputdir,sublist{i},taskid);
%% Basic parameters
matlabbatch{1}.spm.stats.fmri_spec.dir = {subdir};
matlabbatch{1}.spm.stats.fmri_spec.timing.units = unit; % specified above
matlabbatch{1}.spm.stats.fmri_spec.timing.RT = TR; % specified above
matlabbatch{1}.spm.stats.fmri_spec.timing.fmri_t = 68; %<--- look into this
matlabbatch{1}.spm.stats.fmri_spec.timing.fmri_t0 = 34; %<--- look into this
%% Load input files for the specified task
sub_inputdir=fullfile(BIDSpreproc,sublist{i},'func');
sub_inputdirA=fullfile(BIDSpreproc,sublist{i},'anat');
%------------------------------------------------------------------
func=[sub_inputdir,filesep,sublist{i},'_task-',taskid,'_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz'];
func_nii=[sub_inputdir,filesep,sublist{i}, '_task-',taskid,'_space-MNI152NLin2009cAsym_desc-preproc_bold.nii'];
if ~exist(func_nii,'file'), gunzip(func)
end
run_scans = spm_select('Expand',func_nii);
matlabbatch{1}.spm.stats.fmri_spec.sess(1).scans = cellstr(run_scans);
% Load the condition files
events = spm_load([BIDS.dir,filesep,sublist{i},'/func/', sublist{i},'_task-',taskid,'_events.tsv']) %load TSV condition file
names{1} = 'mental';
t = strcmp(names{1}, events.trial_type)
onsets{1} = transpose(events.onset(t));
durations{1} = transpose(events.duration(t));
names{2} = 'pain';
t = strcmp(names{2}, events.trial_type)
onsets{2} = transpose(events.onset(t));
durations{2} = transpose(events.duration(t));
file_mat = [subdir,filesep,sublist{i},'_task-',taskid,'_conditions.mat'];
save(file_mat, 'names', 'onsets', 'durations')
matlabbatch{1}.spm.stats.fmri_spec.sess(1).cond = struct('name', {}, 'onset', {}, 'duration', {}, 'tmod', {}, 'pmod', {}, 'orth', {});
matlabbatch{1}.spm.stats.fmri_spec.sess(1).multi = {file_mat};
% Confounds file
confounds=spm_load([sub_inputdir,filesep,sublist{i},'_task-',taskid,'_desc-confounds_regressors.tsv']) ;
confounds_matrix=[confounds.framewise_displacement, confounds.a_comp_cor_00,confounds.a_comp_cor_01,confounds.a_comp_cor_02,confounds.a_comp_cor_03, confounds.a_comp_cor_04,confounds.a_comp_cor_05, confounds.trans_x, confounds.trans_y, confounds.trans_z, confounds.rot_x, confounds.rot_y, confounds.rot_z];
confounds_name=[subdir,filesep,sublist{i},'_task-',taskid,'_acomcorr.txt'];
confounds_matrix(isnan(confounds_matrix)) = 0 % nanmean(confounds_matrix); %check this <-----
if ~exist(confounds_name,'file'), dlmwrite(confounds_name,confounds_matrix)
end
matlabbatch{1}.spm.stats.fmri_spec.sess(1).multi_reg = {confounds_name};
matlabbatch{1}.spm.stats.fmri_spec.sess(1).hpf = 128; % High-pass filter (hpf) without using cosine regressors
%% Model (Default)
matlabbatch{1}.spm.stats.fmri_spec.fact = struct('name', {}, 'levels', {});
matlabbatch{1}.spm.stats.fmri_spec.bases.hrf.derivs = [0 0];
matlabbatch{1}.spm.stats.fmri_spec.volt = 1;
matlabbatch{1}.spm.stats.fmri_spec.global = 'Scaling';
mask=[sub_inputdirA,filesep,sublist{i},'_space-MNI152NLin2009cAsym_label-GM_probseg.nii.gz'];
mask_nii=[sub_inputdirA,filesep,sublist{i},'_space-MNI152NLin2009cAsym_label-GM_probseg.nii'];
if ~exist(mask_nii,'file'), gunzip(mask)
end
mask_nii=[mask_nii, ',1']
matlabbatch{1}.spm.stats.fmri_spec.mask = {mask_nii};
matlabbatch{1}.spm.stats.fmri_spec.mthresh = 0.8;
matlabbatch{1}.spm.stats.fmri_spec.cvi = 'none';
%% Model estimation (Default)
matlabbatch{2}.spm.stats.fmri_est.spmmat = {[subdir filesep 'SPM.mat']};
matlabbatch{2}.spm.stats.fmri_est.write_residuals = 0;
matlabbatch{2}.spm.stats.fmri_est.method.Classical = 1;
%% Contrasts
matlabbatch{3}.spm.stats.con.spmmat = {[subdir filesep 'SPM.mat']};
% Set contrasts of interest.
matlabbatch{3}.spm.stats.con.consess{1}.tcon.name = 'mental_pain';
matlabbatch{3}.spm.stats.con.consess{1}.tcon.convec = [1 -1 0 0 0 0 0 0 0 0 0 0 0 0 0];
matlabbatch{3}.spm.stats.con.consess{2}.tcon.name = 'pain_mental';
matlabbatch{3}.spm.stats.con.consess{2}.tcon.convec = [-1 1 0 0 0 0 0 0 0 0 0 0 0 0 0];
matlabbatch{3}.spm.stats.con.delete = 0;
%% Run matlabbatch jobs
spm_jobman('run',matlabbatch);
end
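The confound handling in the script above (pick a set of fMRIPrep columns, replace the undefined first-volume framewise displacement with 0) can be cross-checked outside MATLAB. A hedged Python sketch using only the standard library; the column names follow fMRIPrep 1.5.x and the two-row TSV below is a made-up excerpt:

```python
import csv
import io

# A two-row excerpt in fMRIPrep's confounds TSV format; the first
# framewise_displacement value is 'n/a' (undefined for volume 1).
TSV = ("framewise_displacement\ttrans_x\ttrans_y\n"
       "n/a\t0.01\t-0.02\n"
       "0.12\t0.02\t-0.01\n")

def confound_matrix(tsv_text, columns):
    """Read selected confound columns; replace n/a (NaN) entries with 0.0."""
    rows = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    matrix = []
    for row in rows:
        matrix.append([float(row[c]) if row[c] != "n/a" else 0.0
                       for c in columns])
    return matrix

m = confound_matrix(TSV, ["framewise_displacement", "trans_x", "trans_y"])
print(m)  # → [[0.0, 0.01, -0.02], [0.12, 0.02, -0.01]]
```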
Run the following commands in the terminal.
Dataset_1:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset1"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset1_ppn101"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset1_ppn114"
Dataset_2:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset2_ppn201_202"
Dataset_3:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset3"
Dataset_4:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset4"
Create a group average for the GM_probseg.nii for each dataset in Matlab (change the code per dataset; run this in MATLAB):
clear all
spm('Defaults','fMRI');
spm_jobman('initcfg');
BIDS = spm_BIDS('/Volumes/Project0255/dataset_3'); %change this
BIDSfirst=fullfile(BIDS.dir,'derivatives/fmriprep');
sublist = transpose(BIDS.participants.participant_id)
subex = [] %subjects that don't have an anatomical (14 dataset_1)
sublist(subex) = [];
for i=1:length(sublist)
subdir=fullfile(BIDSfirst,sublist{i}, 'anat')
matlabbatch{1}.spm.util.imcalc.input{i,1} = [subdir, filesep, sublist{i}, '_space-MNI152NLin2009cAsym_label-GM_probseg.nii,1']
end
matlabbatch{1}.spm.util.imcalc.output = 'dataset3_averageGM';
matlabbatch{1}.spm.util.imcalc.outdir = {'/Volumes/Project0255/dataset_3/derivatives/fmriprep'}; %change this
matlabbatch{1}.spm.util.imcalc.expression = 'mean(X)';
matlabbatch{1}.spm.util.imcalc.var = struct('name', {}, 'value', {});
matlabbatch{1}.spm.util.imcalc.options.dmtx = 1;
matlabbatch{1}.spm.util.imcalc.options.mask = 0;
matlabbatch{1}.spm.util.imcalc.options.interp = 1;
matlabbatch{1}.spm.util.imcalc.options.dtype = 4;
spm_jobman('run',matlabbatch);
Example MATLAB script (dataset 3):
%========================================================================
% SPM second-level analysis for fmriprep data in BIDS format
%========================================================================
% This script is written by Ruud Hortensius and Michaela Kent
% (University of Glasgow)
%
% Last updated: January 2020
%========================================================================
clear all
%% Inputdirs
BIDS = spm_BIDS('/Volumes/Project0255/dataset_3'); % Parse BIDS directory (easier to query info from dataset)
BIDSfirst=fullfile(BIDS.dir,'derivatives/bids_spm/first_level'); % get the first-level directory
sublist = transpose(BIDS.participants.participant_id) %get subject list including the 'sub'
subex = [] %subjects that don't have a second-session
sublist(subex) = []; %update the subjects
%nsession = spm_BIDS(BIDS,'sessions') %how many sessions? careful, sometimes collapsing across sessions not wanted
%sessionid = 'ses-01' %get session id
taskid='movieHC'; %specify the task to be analysed
contrast='con_0001'; %specify the contrast to be analysed
contrast_name='mental_hc'; %specify the name of the contrast
smoothing = 1; %smoothing of first-level contrasts (1=yes, 0=no)
s_kernel = [5 5 5]
%% Outputdirs
outputdir=fullfile(BIDS.dir,'derivatives/bids_spm/second_level', char(contrast_name)); % root outputdir for sublist
spm_mkdir(outputdir); % create output directory
spm('Defaults','fMRI'); %Initialise SPM fmri
spm_jobman('initcfg'); %Initialise SPM batch mode
%% Smoothing of first-level contrasts
if smoothing == 1
for i=1:length(sublist)
subdir=fullfile(BIDSfirst,sublist{i}, taskid);
matlabbatch{1}.spm.spatial.smooth.data{i,1} = [subdir, filesep, contrast, '.nii,1'];
matlabbatch{1}.spm.spatial.smooth.fwhm = s_kernel;
matlabbatch{1}.spm.spatial.smooth.dtype = 0;
matlabbatch{1}.spm.spatial.smooth.im = 0;
matlabbatch{1}.spm.spatial.smooth.prefix = 's';
end
spm_jobman('run',matlabbatch);
clear matlabbatch
end
%% Load the contrasts
matlabbatch{1}.spm.stats.factorial_design.dir = {outputdir};
for i=1:length(sublist)
subdir=fullfile(BIDSfirst,sublist{i}, taskid);
if smoothing == 1
matlabbatch{1,1}.spm.stats.factorial_design.des.t1.scans{i,1} = [subdir, filesep, 's', contrast, '.nii,1']
else
matlabbatch{1,1}.spm.stats.factorial_design.des.t1.scans{i,1} = [subdir, filesep, contrast, '.nii,1']
end
end
matlabbatch{1}.spm.stats.factorial_design.cov = struct('c', {}, 'cname', {}, 'iCFI', {}, 'iCC', {});
matlabbatch{1}.spm.stats.factorial_design.multi_cov = struct('files', {}, 'iCFI', {}, 'iCC', {});
matlabbatch{1}.spm.stats.factorial_design.masking.tm.tm_none = 1;
matlabbatch{1}.spm.stats.factorial_design.masking.im = 1;
matlabbatch{1}.spm.stats.factorial_design.masking.em = {''};
matlabbatch{1}.spm.stats.factorial_design.globalc.g_omit = 1;
matlabbatch{1}.spm.stats.factorial_design.globalm.gmsca.gmsca_no = 1;
matlabbatch{1}.spm.stats.factorial_design.globalm.glonorm = 1;
%% Model estimation
matlabbatch{2}.spm.stats.fmri_est.spmmat = {[outputdir filesep 'SPM.mat']};
matlabbatch{2}.spm.stats.fmri_est.write_residuals = 0;
matlabbatch{2}.spm.stats.fmri_est.method.Classical = 1;
%% Contrast
%--------------------------------------------------------------------------
matlabbatch{3}.spm.stats.con.spmmat = {[outputdir filesep 'SPM.mat']};
matlabbatch{3}.spm.stats.con.consess{1}.tcon.name = contrast_name;
matlabbatch{3}.spm.stats.con.consess{1}.tcon.weights = 1;
matlabbatch{3}.spm.stats.con.consess{1}.tcon.sessrep = 'none';
matlabbatch{3}.spm.stats.con.delete = 0;
%% Results
%--------------------------------------------------------------------------
matlabbatch{4}.spm.stats.results.spmmat = {[outputdir filesep 'SPM.mat']};
matlabbatch{4}.spm.stats.results.conspec.titlestr = '';
matlabbatch{4}.spm.stats.results.conspec.contrasts = 1;
matlabbatch{4}.spm.stats.results.conspec.threshdesc = 'none';
matlabbatch{4}.spm.stats.results.conspec.thresh = 0.001;
matlabbatch{4}.spm.stats.results.conspec.extent = 5;
matlabbatch{4}.spm.stats.results.conspec.conjunction = 1;
matlabbatch{4}.spm.stats.results.conspec.mask.image.name = {'/Volumes/Project0255/dataset_3/derivatives/fmriprep/dataset3_averageGM.nii,1'};
matlabbatch{4}.spm.stats.results.conspec.mask.image.mtype = 0;
matlabbatch{4}.spm.stats.results.units = 1;
matlabbatch{4}.spm.stats.results.export{1}.pdf = true;
matlabbatch{4}.spm.stats.results.export{2}.jpg = true;
matlabbatch{4}.spm.stats.results.export{3}.csv = true;
matlabbatch{4}.spm.stats.results.export{4}.tspm.basename = contrast_name;
%% Run matlabbatch jobs
spm_jobman('run',matlabbatch);
Run it separately for the datasets:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset1"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset2"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset3"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset4"
Run the following (ROI_extract.m) script in matlab (change the code per dataset and roi and contrast - run this in MATLAB):
%========================================================================
% fROI analysis for fmriprep data in BIDS format
%========================================================================
% This script is written by Michaela Kent and Ruud Hortensius
% (University of Glasgow)
%
% Last updated: January 2020
%========================================================================
clear all
%add marsbar to path
marsbar('on')
%% Inputdirs
BIDS = spm_BIDS('/Volumes/Project0255/dataset_4'); % parse BIDS directory (easier to query info from dataset)
BIDSsecond=fullfile(BIDS.dir,'derivatives/bids_spm/second_level'); % get the second-level directory
contrastid = 'mental' %can be either mental (vs. pain) or pain (vs. mental)
networkid = 'tom' %can be either tom (theory-of-mind) or pain (pain matrix)
%% Outputdirs
outputdir=fullfile(BIDS.dir,'derivatives/roi', networkid); % root outputdir for sublist
spm_mkdir(outputdir); % create output directory
%% Load design matrix
spm_name = spm_load(fullfile(BIDSsecond, filesep, contrastid , 'SPM.mat'))
D = mardo(spm_name);
%% Load rois
parcels = dir(fullfile(BIDS.dir,'derivatives/parcels/', networkid))
parcels = struct2cell(parcels(arrayfun(@(x) ~strcmp(x.name(1),'.'),parcels)))
parcels(2:6,:) = []
for i=1:length(parcels)
roi = fullfile(BIDS.dir,'derivatives/parcels/', networkid, parcels{i})
R = maroi(roi);
% Fetch data into marsbar data object
mY = get_marsy(R, D, 'mean');
roi_data = summary_data(mY); % get summary time course(s)
roi_name = [outputdir,filesep,parcels{i},'.tsv'];
dlmwrite(roi_name,roi_data);
end
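The ROI extraction above boils down to averaging voxel values inside each parcel per volume. A toy Python sketch of that "mean within mask" step (illustrative flat arrays, not MarsBaR objects):

```python
def roi_mean_timecourse(data, mask):
    """data: list of volumes (each a flat list of voxel values);
    mask: flat 0/1 list selecting the ROI voxels.
    Returns the mean ROI value per volume."""
    idx = [i for i, m in enumerate(mask) if m]
    if not idx:
        raise ValueError("empty ROI mask")
    return [sum(vol[i] for i in idx) / len(idx) for vol in data]

# A 4-voxel 'brain' over two volumes, with a 2-voxel ROI:
mask = [1, 0, 1, 0]
data = [[10.0, 0.0, 20.0, 0.0],
        [12.0, 0.0, 18.0, 0.0]]
print(roi_mean_timecourse(data, mask))  # → [15.0, 15.0]
```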
Add sub-201 and sub-202 to get the fROI data (different parameters, not included in the whole-brain analysis):
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset2_201_202"
matlab -batch "ROI_extract_201_202"
Dataset 2: sub-206-212, 219, 221-22, 224-25, 228, 231, 233-34 completed a version with the scale ranging from 1-10 instead of 0-10. Analyses should be run with and without these participants:
sub_ex = c(206:212, 219, 221:222, 224:225, 228, 231, 233:234)
Get the IDAQ data for all the participants:
library(tidyverse)
#load data (/Volumes/Project0255/dataset_1/sourcedata/)
DF.d1 <- read_csv(file = paste("IDAQ_dataset1.csv", sep ="")) %>%
gather("sub", "value", 4:32)
Parsed with column specification:
cols(
  .default = col_double(),
  scale = col_character(),
  subscale = col_character()
)
See spec(...) for full column specifications.
DF.d2 <- read_csv(file = paste("IDAQ_dataset2.csv", sep ="")) %>%
gather("sub", "value", 4:38)
DF.d3 <- read_csv(file = paste("IDAQ_dataset3.csv", sep ="")) %>%
gather("sub", "value", 4:25)
DF.d4 <- read_csv(file = paste("IDAQ_dataset4.csv", sep ="")) %>%
gather("sub", "value", 4:25)
DF.idaq <- bind_rows(DF.d1, DF.d2, DF.d3, DF.d4, .id = "dataset") %>%
mutate(sub=gsub('sub-','',sub))%>%
transform(sub=as.integer(sub)) %>%
mutate(scale = as.factor(ifelse(scale == "IDAQ-NA", "IDAQNA", "IDAQ")))
rm(DF.d1, DF.d2, DF.d3, DF.d4)
Check the reliability of the IDAQ scale:
library("psych")
DF.idaq %>%
filter(scale == "IDAQ") %>%
#filter(!sub %in% sub_ex) %>%
select(-scale, -subscale) %>%
spread(itemnr, value) %>%
select(-sub, -dataset) %>%
alpha(na.rm = TRUE)
Reliability analysis
Call: alpha(x = ., na.rm = TRUE)
95% confidence boundaries: lower 0.75, alpha 0.80, upper 0.86
Reliability if an item is dropped:
Item statistics
Check the reliability of the IDAQ-NA scale:
DF.idaq %>%
filter(scale == "IDAQNA") %>%
filter(!sub %in% sub_ex) %>%
select(-scale, -subscale) %>%
spread(itemnr, value) %>%
select(-sub, -dataset) %>%
alpha(na.rm = TRUE)
Reliability analysis
Call: alpha(x = ., na.rm = TRUE)
95% confidence boundaries: lower 0.51, alpha 0.62, upper 0.73
Reliability if an item is dropped:
Item statistics
Test if there are differences in IDAQ and IDAQ-NA scores between datasets.
Before fitting any model we establish which link function to use. This code is taken from Kevin Stadler’s GitHub. In short, it uses the ordinal package to (quickly) find the link function that best fits the data:
library(ordinal)
cumulativemodelfit <- function(formula, data, links=c("logit", "probit", "cloglog", "cauchit"),
thresholds=c("flexible", "equidistant"), verbose=TRUE) {
names(links) <- links
names(thresholds) <- thresholds
llks <- outer(links, thresholds,
Vectorize(function(link, threshold)
# catch error for responses with 2 levels
tryCatch(ordinal::clm(formula, data=data, link=link, threshold=threshold)$logLik,
error = function(e) NA)))
print(llks)
if (verbose) {
bestfit <- which.max(llks)
cat("\nThe best link function is ", links[bestfit %% length(links)], " with a ",
thresholds[1 + bestfit %/% length(thresholds)], " threshold (logLik ", llks[bestfit],
")\n", sep="")
}
invisible(llks)
}
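The decoding of which.max’s linear index into a (link, threshold) pair relies on R counting matrix elements in column-major order. A minimal base-R illustration with a toy matrix (not the actual log-likelihoods):

```r
# which.max() on a matrix returns a single linear index, counted down
# the columns (column-major order), not a (row, column) pair.
m <- matrix(c(3, 1, 2, 4,
              0, 5, -1, 2), nrow = 4)   # 4 "links" x 2 "thresholds"
idx <- which.max(m)                # 6th element in column-major order: m[2, 2] = 5
row <- (idx - 1) %% nrow(m) + 1    # recover the row (link)
col <- (idx - 1) %/% nrow(m) + 1   # recover the column (threshold)
c(idx, row, col)                   # 6, 2, 2
```

Note that the `%%`/`%/%` arithmetic inside cumulativemodelfit() differs slightly from this general form, so its printed label can misindex when the best fit sits in the last row of the table; the logLik matrix it prints is always the authoritative output.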
For the anthropomorphism subscale first:
DF.test = DF.idaq %>%
filter(scale == "IDAQ") %>%
mutate_all(as.factor) %>%
mutate(value = factor(value, levels=c(0:10), ordered=TRUE)) #add contrast coding?
Get the link function:
cumulativemodelfit(value ~ 1, data=DF.test)
flexible equidistant
logit -3559.563 -3638.793
probit -3559.563 -3629.921
cloglog -3559.563 -3663.966
cauchit -3559.563 -3629.921
The best link function is logit with a flexible threshold (logLik -3559.563)
library(brms)
options(mc.cores = parallel::detectCores()) #run once (run on multiple cores)
ord.1 <- brm(
value ~ 1 + dataset,
data = DF.test,
family = cumulative("logit"),
file = 'ord.1~simple.RDS'
)
Get summary and marginal effects:
summary(ord.1) #use prob = .99 for 99% credible intervals
conditional_effects(ord.1, "dataset", categorical = TRUE)
plot(ord.1)
ord.2 <- brm(
value ~ 1 + dataset +
(1|sub) + (1|itemnr),
data = DF.test,
family = cumulative("logit"),
file = 'ord.2~random.RDS'
)
Get summary and marginal effects:
summary(ord.2) #use prob = .99 for 99% credible intervals
conditional_effects(ord.2, "dataset", categorical = TRUE)
ord.3 <- brm(
value ~ 1 + cs(dataset) +
(cs(1)|sub) + (cs(1)|itemnr),
data = DF.test,
family = acat("logit"),
file = 'ord.3~category.RDS'
)
Get summary and marginal effects:
summary(ord.3) #use prob = .99 for 99% credible intervals
conditional_effects(ord.3, "dataset", categorical = TRUE)
ord.4 <- brm(
value ~ 1 + dataset +
(1|sub) + (1|itemnr),
data = DF.test,
family = acat("logit"),
file = 'ord.4~category2.RDS'
)
Get summary and marginal effects:
summary(ord.4) #use prob = .99 for 99% credible intervals
conditional_effects(ord.4, "dataset", categorical = TRUE)
ord.5 <- brm(
formula = bf(value ~ 1 + dataset) +
lf(disc ~ 0 + dataset, cmc = FALSE),
data = DF.test,
family = cumulative("logit"),
file = 'ord.5~control.RDS'
)
Get summary and marginal effects:
summary(ord.5) #use prob = .99 for 99% credible intervals
conditional_effects(ord.5, "dataset", categorical = TRUE)
modelC1 <- LOO(ord.1, ord.2, ord.3, ord.4, ord.5)
save(modelC1, file="modelC1.RData")
load('modelC1.RData')
modelC1
DF.test = DF.test %>%
filter(!sub %in% sub_ex)
cumulativemodelfit(value ~ 1, data=DF.test)
ord.2E <- brm(
value ~ 1 + dataset +
(1|sub) + (1|itemnr),
data = DF.test,
family = cumulative(link = "probit", threshold="equidistant"),
file = 'ord.2E~random.RDS'
)
Get summary and marginal effects:
summary(ord.2E) #use prob = .99 for 99% credible intervals
conditional_effects(ord.2E, "dataset", categorical = TRUE)
Calculate the IDAQ per subject:
DF.idaq <- DF.idaq %>%
group_by(sub,dataset, scale) %>%
summarise(score = sum(value, na.rm = TRUE)) %>%
ungroup()%>%
mutate_at(vars(-score),as.factor)
Visualise the scores across the datasets and scales:
Calculate the median and IQR per scale (table S2):
DF.idaq %>%
#filter(!sub %in% sub_ex) %>%
group_by(sub,dataset, scale) %>%
summarise(score = sum(score)) %>%
group_by(scale) %>%
summarise(median = median(score),
iqr = IQR(score))
Create function to load the data for the different networks:
library(fs)
roi_extract <- function(datasetno, substart, subend, network, nroi) {
dir_ls(paste("dataset_", datasetno, "/derivatives/roi/", network, sep = ""), regexp = "\\.tsv$") %>%
map_dfr(read.delim, sep = "\t", .id = "id", header = FALSE) %>%
mutate(dataset = datasetno) %>%
mutate(network = network) %>%
mutate(network = str_extract(network, "tom|pain")) %>%
mutate(id = str_extract(id, "dmpfc|mmpfc|vmpfc|ltpj|rtpj|prec|amcc|lmfg|rmfg|ls2|rs2|linsula|rinsula")) %>%
rename(roi = id, contrast = V1) %>%
mutate(sub = rep(substart:subend, times=nroi, each=1)) %>% # assumes each ROI file lists subjects in ascending order
select(5,3,4,1:2) # reorder columns to: sub, dataset, network, roi, contrast
}
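The rep() call that assigns subject numbers assumes every ROI .tsv stacks all subjects in ascending order, one file per ROI. A quick base-R check of the ordering it produces (toy subject numbers, not the real data):

```r
# rep(substart:subend, times = nroi, each = 1) repeats the full subject
# sequence once per ROI file, so the subject column cycles 101, 102, ...
# within each ROI rather than blocking all rows of one subject together.
subs <- rep(101:103, times = 2, each = 1)
subs  # 101 102 103 101 102 103
```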
Load the data for the Theory-of-Mind network:
DF.d1 <- roi_extract(1, 101, 129, "tom", 6)
DF.d2.a <- roi_extract(2, 201, 202, "tom/201_202", 6)
DF.d2.b <- roi_extract(2, 203, 235, "tom", 6)
DF.d3 <- roi_extract(3, 301, 322, "tom", 6)
DF.d4 <- roi_extract(4, 401, 422, "tom", 6)
DF.temp <- bind_rows(DF.d1, DF.d2.a, DF.d2.b, DF.d3, DF.d4)
Load the data for the Pain Matrix:
DF.d1 <- roi_extract(1, 101, 129, "pain", 7)
DF.d2.a <- roi_extract(2, 201, 202, "pain/201_202/", 7)
DF.d2.b <- roi_extract(2, 203, 235, "pain", 7)
DF.d3 <- roi_extract(3, 301, 322, "pain", 7)
DF.d4 <- roi_extract(4, 401, 422, "pain", 7)
DF.roi <- bind_rows(DF.temp, DF.d1, DF.d2.a, DF.d2.b, DF.d3, DF.d4)
rm(DF.d1, DF.d2.a, DF.d2.b, DF.d3, DF.d4, DF.temp, DF.test)
Reorder ROI names for plots:
order <- c("rtpj", "ltpj", "prec", "vmpfc","mmpfc","dmpfc", "rs2", "ls2", "rinsula", "linsula", "rmfg", "lmfg", "amcc")
DF.roi <- DF.roi %>%
mutate_at(vars(-contrast),as.factor) %>%
group_by(sub, dataset) %>%
mutate(roi = fct_relevel(roi, order))
Plot the ToM activity across regions and datasets:
p1 <- DF.roi %>%
filter(network == "tom") %>%
ggplot(.,aes(x=roi,y=contrast,fill=roi))+
geom_hline(yintercept = 0, color = "grey", linetype = 2) +
geom_flat_violin(position=position_nudge(x = .2, y = 0),adjust =2, trim = FALSE, colour = "Black") +
geom_point(aes(colour = roi, fill = roi), position=position_jitter(width = .05), size = .5, shape = 21, colour = "Black") +
geom_boxplot(aes(x=roi,y=contrast),position=position_nudge(x = .1, y = 0),outlier.shape = NA, alpha = .5, width = .1, colour = "black") +
theme_classic() +
ylab(paste("contrast estimates (mental > pain)")) +
scale_fill_brewer(palette = "Blues") +
scale_colour_brewer(palette = "Blues") +
ggtitle(paste("ToM network contrasts estimates across datasets")) +
theme(legend.position="none") +
facet_wrap(~dataset)
Note: geom_flat_violin() is not part of ggplot2 and must be loaded first (for example by sourcing the raincloud-plots helper script that defines it); otherwise this chunk errors with "could not find function geom_flat_violin".
Test for potential differences between datasets and ROIs:
tom.compare <- brm(
formula = contrast ~ roi * dataset,
data = DF.roi %>% filter(network == "tom"),
#family = cumulative("logit"),
file = 'tom.compare.RDS'
)
summary(tom.compare)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: contrast ~ roi * dataset
Data: DF.roi %>% filter(network == "tom") (Number of observations: 648)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept 0.60 0.08 0.45 0.75 1.00 1342 2591
roiltpj -0.03 0.11 -0.24 0.18 1.00 1872 2830
roiprec 0.39 0.10 0.18 0.60 1.00 1896 2151
roivmpfc -0.29 0.11 -0.50 -0.08 1.00 2070 2687
roimmpfc -0.23 0.11 -0.44 -0.02 1.00 1699 2870
roidmpfc -0.34 0.11 -0.56 -0.13 1.00 1910 2816
dataset2 0.18 0.10 -0.02 0.38 1.00 1633 2490
dataset3 -0.15 0.11 -0.37 0.08 1.00 1859 2690
dataset4 -0.03 0.12 -0.27 0.20 1.00 1818 2126
roiltpj:dataset2 0.09 0.15 -0.20 0.37 1.00 2030 2977
roiprec:dataset2 -0.08 0.14 -0.35 0.20 1.00 2164 2860
roivmpfc:dataset2 -0.03 0.14 -0.31 0.25 1.00 2321 2870
roimmpfc:dataset2 -0.09 0.14 -0.37 0.19 1.00 1506 3054
roidmpfc:dataset2 -0.10 0.14 -0.37 0.19 1.00 2002 2898
roiltpj:dataset3 -0.10 0.16 -0.41 0.22 1.00 2235 2712
roiprec:dataset3 -0.21 0.16 -0.53 0.10 1.00 2261 2701
roivmpfc:dataset3 -0.02 0.16 -0.33 0.29 1.00 2203 3083
roimmpfc:dataset3 -0.00 0.16 -0.32 0.30 1.00 2321 3007
roidmpfc:dataset3 0.11 0.16 -0.21 0.42 1.00 2378 2685
roiltpj:dataset4 -0.04 0.16 -0.36 0.28 1.00 2183 2531
roiprec:dataset4 -0.29 0.16 -0.61 0.03 1.00 2445 2650
roivmpfc:dataset4 -0.15 0.16 -0.46 0.17 1.00 2339 2596
roimmpfc:dataset4 -0.22 0.16 -0.53 0.11 1.00 2245 2733
roidmpfc:dataset4 -0.06 0.16 -0.39 0.26 1.00 2253 2756
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 0.40 0.01 0.38 0.42 1.00 4688 3061
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
plot(tom.compare)
conditional_effects(tom.compare)
Plot the Pain Matrix activity across regions and datasets:
p2 <- DF.roi %>%
filter(network == "pain") %>%
ggplot(.,aes(x=roi,y=contrast,fill=roi))+
geom_hline(yintercept = 0, color = "grey", linetype = 2) +
geom_flat_violin(position=position_nudge(x = .2, y = 0),adjust =2, trim = FALSE, colour = "Black") +
geom_point(aes(colour = roi, fill = roi), position=position_jitter(width = .05), size = .5, shape = 21, colour = "Black") +
geom_boxplot(aes(x=roi,y=contrast),position=position_nudge(x = .1, y = 0),outlier.shape = NA, alpha = .5, width = .1, colour = "black") +
theme_classic() +
ylab(paste("contrast estimates (pain > mental)")) +
scale_fill_brewer(palette = "Reds") +
scale_colour_brewer(palette = "Reds") +
ggtitle(paste("Pain matrix contrasts estimates across datasets")) + theme(legend.position="none") +
facet_wrap(~dataset)
p2
Test for potential differences between datasets and ROIs:
pain.compare <- brm(
formula = contrast ~ roi * dataset,
data = DF.roi %>% filter(network == "pain"),
#family = cumulative("logit"),
file = 'pain.compare.RDS'
)
summary(pain.compare)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: contrast ~ roi * dataset
Data: DF.roi %>% filter(network == "pain") (Number of observations: 756)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept 0.73 0.07 0.59 0.86 1.00 1162 1832
roils2 -0.14 0.09 -0.32 0.05 1.00 1699 2755
roirinsula -0.42 0.10 -0.61 -0.24 1.00 1592 2578
roilinsula -0.44 0.10 -0.63 -0.26 1.00 1428 2394
roirmfg -0.53 0.10 -0.72 -0.34 1.00 1480 2705
roilmfg -0.35 0.10 -0.54 -0.17 1.00 1735 2682
roiamcc -0.54 0.10 -0.72 -0.35 1.00 1508 2540
dataset2 0.38 0.09 0.19 0.56 1.00 1645 2137
dataset3 -0.11 0.11 -0.32 0.09 1.00 1224 2211
dataset4 -0.08 0.10 -0.29 0.12 1.00 1646 2118
roils2:dataset2 -0.04 0.13 -0.29 0.21 1.00 2224 3033
roirinsula:dataset2 -0.24 0.13 -0.49 0.01 1.00 2080 2455
roilinsula:dataset2 -0.12 0.13 -0.38 0.14 1.00 2196 3008
roirmfg:dataset2 -0.35 0.13 -0.61 -0.09 1.00 2097 2840
roilmfg:dataset2 -0.53 0.13 -0.79 -0.27 1.00 2249 2796
roiamcc:dataset2 -0.12 0.13 -0.37 0.13 1.00 2070 3017
roils2:dataset3 0.05 0.15 -0.24 0.34 1.00 1786 2526
roirinsula:dataset3 -0.01 0.15 -0.29 0.28 1.00 1711 2230
roilinsula:dataset3 0.06 0.15 -0.22 0.35 1.00 1485 2966
roirmfg:dataset3 0.05 0.15 -0.24 0.34 1.00 1734 2535
roilmfg:dataset3 -0.04 0.15 -0.33 0.25 1.00 1881 2503
roiamcc:dataset3 0.22 0.15 -0.07 0.51 1.00 1839 2592
roils2:dataset4 0.06 0.14 -0.22 0.33 1.00 2078 2870
roirinsula:dataset4 0.07 0.15 -0.21 0.35 1.00 2141 2734
roilinsula:dataset4 0.18 0.15 -0.10 0.47 1.00 2028 2931
roirmfg:dataset4 0.07 0.15 -0.22 0.36 1.00 2079 2366
roilmfg:dataset4 -0.17 0.15 -0.45 0.13 1.00 2059 2524
roiamcc:dataset4 0.24 0.15 -0.04 0.53 1.00 2147 2756
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 0.36 0.01 0.35 0.38 1.00 5135 3017
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
plot(pain.compare)
conditional_effects(pain.compare)
Combine the ROI and IDAQ data into one data frame:
DF.roi <- DF.idaq %>%
left_join(DF.roi, by = c("sub","dataset")) %>%
pivot_wider(names_from=scale, values_from = score)
Center the variables for the formal analysis:
DF.roi$cent_IDAQNA <- scale(DF.roi$IDAQNA, scale = TRUE)
DF.roi$cent_IDAQ <- scale(DF.roi$IDAQ, scale = TRUE)
DF.roi <- DF.roi %>% group_by(roi) %>% mutate(cent_contrast = scale(contrast, scale =TRUE)) #scale per roi (analyses are done per roi)
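Note that scale() returns a one-column matrix rather than a plain vector, which can surprise downstream code. A small base-R sketch of what the centering does (toy values, hypothetical):

```r
x <- c(2, 4, 6, 8)
z <- scale(x, center = TRUE, scale = TRUE)  # z-scores: (x - mean(x)) / sd(x)
is.matrix(z)                # TRUE: scale() keeps the matrix class
mean(z)                     # ~0 after centering
sd(as.numeric(z))           # 1 after standardising
as.numeric(z)               # drop the matrix class if a plain vector is needed
```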
Create scatterplots with linear and non-linear lines:
For the formal analysis we will test a linear and quadratic relationship between IDAQ and ToM network activity:
DF.roi$cent_IDAQ2 <- DF.roi$cent_IDAQ^2
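Squaring the centred predictor gives a quadratic term that is itself no longer centred (its mean is close to the variance of cent_IDAQ, roughly 1 after standardising), so the intercept is interpreted at the mean IDAQ score, not at cent_IDAQ2 = 0. A toy base-R check:

```r
x  <- as.numeric(scale(c(1, 2, 3, 4, 5)))  # centred, sd = 1 (toy values)
x2 <- x^2
mean(x)   # 0: the linear term is centred
mean(x2)  # 0.8: the quadratic term is not centred
```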
We run the models with uninformative (default) priors and create a function to run it for each region separately:
reg_model <- function(region){
brm(formula = cent_contrast ~ cent_IDAQ + cent_IDAQ2,
data = DF.roi %>% filter(roi == region),
family = gaussian,
chains = 4,
iter = 4000,
seed = 42, #so the model is reproducible
file = paste0(region, ".RDS"))
}
Run the models for the ToM network first. It will load the model if it is already calculated:
library("brms")
rtpj <- reg_model("rtpj")
ltpj <- reg_model("ltpj")
prec <- reg_model("prec")
vmpfc <- reg_model("vmpfc")
mmpfc <- reg_model("mmpfc")
dmpfc <- reg_model("dmpfc")
Get the summaries (do some wrangling):
#tab_model(model_tom) #sjPlot's tab_model() errors here because model_tom is not defined
summary(rtpj)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: cent_contrast ~ cent_IDAQ + cent_IDAQ2
Data: DF.roi %>% filter(roi == region) (Number of observations: 108)
Samples: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
total post-warmup samples = 8000
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept 0.03 0.12 -0.20 0.27 1.00 8407 6013
cent_IDAQ -0.05 0.10 -0.24 0.15 1.00 8721 6323
cent_IDAQ2 -0.03 0.07 -0.17 0.10 1.00 9151 5801
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 1.02 0.07 0.90 1.17 1.00 7654 5451
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Get the posterior summaries (do some wrangling):
posterior_summary(rtpj)
posterior_summary(ltpj)
posterior_summary(prec)
posterior_summary(vmpfc)
posterior_summary(mmpfc)
posterior_summary(dmpfc)
Create the posterior plot:
library("bayesplot")
posterior_plots <- function(model){
posterior <- as.matrix(model)
plot_title <- ggtitle(paste(deparse(substitute(model))))
mcmc_areas(posterior,
pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),
prob = 0.8) + plot_title + theme_bw()
}
Plot the posterior distributions with medians and 80% intervals:
posterior_plots(tom)
Create plots (different way):
tom_combined <- bind_rows("rtpj" = as_tibble(as.mcmc(rtpj, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"), combine_chains = TRUE)),
"ltpj" = as_tibble(as.mcmc(ltpj, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"prec" = as_tibble(as.mcmc(prec, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"vmpfc" = as_tibble(as.mcmc(vmpfc, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"mmpfc" = as_tibble(as.mcmc(mmpfc, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"dmpfc" = as_tibble(as.mcmc(dmpfc, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
.id = "roi")
tom_combined <- tom_combined %>%
pivot_longer(c("b_cent_IDAQ", "b_cent_IDAQ2"),names_to = "varIDAQ") %>%
mutate(roi = fct_relevel(roi, order[1:6]))
Plot the distributions:
Create scatterplots with linear and non-linear lines:
p1 <- DF.roi %>%
filter(network == "pain") %>%
ggplot(aes(x=cent_IDAQ, y=cent_contrast)) +
geom_point(alpha=0.5,show.legend = FALSE) +
geom_smooth(method="lm", formula=y ~ x, se=TRUE, show.legend = FALSE, colour="#0072B2") +
geom_smooth(method="lm", formula=y ~ x+ I(x^2), se=TRUE, show.legend=FALSE, colour = "#D55E00") +
#coord_fixed(ratio = 25/1) +
labs(
x="IDAQ score",
y="contrast (pain vs mental)"
) + theme_classic()
p2 <- DF.roi %>%
filter(network == "pain") %>%
ggplot(aes(x=cent_IDAQ, y=cent_contrast)) +
geom_point(alpha=0.5,show.legend = FALSE) +
geom_smooth(method="lm", formula=y ~ x, se=TRUE, show.legend = FALSE, colour="#0072B2") +
geom_smooth(method="lm", formula=y ~ x+ I(x^2), se=TRUE, show.legend=FALSE, colour = "#D55E00") +
#coord_fixed(ratio = 25/1) +
labs(
x="IDAQ score",
y="contrast (pain vs mental)"
) + theme_classic() +
facet_wrap(~roi, nrow = 2)
p1 + p2
Run the models for the Pain Matrix (control network). It will load the model if it is already calculated:
rs2 <- reg_model("rs2")
ls2 <- reg_model("ls2")
rinsula <- reg_model("rinsula")
linsula <- reg_model("linsula")
rmfg <- reg_model("rmfg")
lmfg <- reg_model("lmfg")
amcc <- reg_model("amcc")
summary(linsula)
posterior_summary(linsula)
plot(linsula)
Create the posterior plots (a different way):
pain_combined <- bind_rows("rs2" = as_tibble(as.mcmc(rs2, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"), combine_chains = TRUE)),
"ls2" = as_tibble(as.mcmc(ls2, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"rinsula" = as_tibble(as.mcmc(rinsula, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"linsula" = as_tibble(as.mcmc(linsula, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"rmfg" = as_tibble(as.mcmc(rmfg, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"lmfg" = as_tibble(as.mcmc(lmfg, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"amcc" = as_tibble(as.mcmc(amcc, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
.id = "roi")
pain_combined <- pain_combined %>%
pivot_longer(c("b_cent_IDAQ", "b_cent_IDAQ2"),names_to = "varIDAQ") %>%
mutate(roi = fct_relevel(roi, order[7:13]))
Plot the distributions:
ggplot(data = pain_combined, aes(value, fill = varIDAQ)) +
geom_density(alpha = .5)+
geom_vline(xintercept = 0, color = "black", linetype = 2) +
ggtitle(paste("Posterior distributions for the Pain Matrix")) +
facet_wrap(~roi, ncol =1) +
coord_fixed(ratio = 1/40) +
theme_classic(base_size = 8) +
theme(strip.background = element_blank(), legend.position="none") +
scale_fill_manual(values=c("#0072B2", "#D55E00"))
#theme(axis.text.y = element_blank())
All ToM regions combined:
tom <- brm(formula = cent_contrast ~ cent_IDAQ + cent_IDAQ2,
data = DF.roi %>% filter(network == "tom"),
family = gaussian,
chains = 4,
iter = 4000,
seed = 42, #so the model is reproducible
file = "tom.RDS")
Summary:
summary(tom)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: cent_contrast ~ cent_IDAQ + cent_IDAQ2
Data: DF.roi %>% filter(network == "tom") (Number of observations: 648)
Samples: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
total post-warmup samples = 8000
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept 0.05 0.05 -0.04 0.15 1.00 8563 6388
cent_IDAQ 0.04 0.04 -0.04 0.12 1.00 8235 5591
cent_IDAQ2 -0.05 0.03 -0.11 -0.00 1.00 8278 6444
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 1.00 0.03 0.94 1.05 1.00 9276 5605
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Posterior summary:
posterior_summary(tom)
Estimate Est.Error Q2.5 Q97.5
b_Intercept 0.05185312 0.04735636 -0.04164000 0.14518239
b_cent_IDAQ 0.04075624 0.03942079 -0.03653057 0.11819703
b_cent_IDAQ2 -0.05275223 0.02713755 -0.10531989 -0.00148772
sigma 0.99620933 0.02778562 0.94304512 1.05299219
lp__ -922.27993455 1.36794623 -925.66288745 -920.54979959
Posterior plots:
tom_all <- bind_rows("all" = as_tibble(as.mcmc(tom, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"), combine_chains = TRUE), .id = "network") %>%
rename(linear = b_cent_IDAQ, quadratic = b_cent_IDAQ2) %>%
pivot_longer(c("linear", "quadratic"),names_to = "varIDAQ") %>%
mutate(network = "tom"))
All Pain Matrix regions combined:
pain <- brm(formula = cent_contrast ~ cent_IDAQ + cent_IDAQ2,
data = DF.roi %>% filter(network == "pain"),
family = gaussian,
chains = 4,
iter = 4000,
seed = 42, #so the model is reproducible
file = "pain.RDS")
Summary:
summary(pain)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: cent_contrast ~ cent_IDAQ + cent_IDAQ2
Data: DF.roi %>% filter(network == "pain") (Number of observations: 756)
Samples: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
total post-warmup samples = 8000
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept -0.01 0.04 -0.10 0.08 1.00 8949 6471
cent_IDAQ 0.00 0.04 -0.07 0.07 1.00 8195 5869
cent_IDAQ2 0.01 0.03 -0.04 0.06 1.00 7997 6628
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 1.00 0.03 0.95 1.05 1.00 7882 5243
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Posterior summary:
posterior_summary(pain)
Estimate Est.Error Q2.5 Q97.5
b_Intercept -6.669523e-03 0.04438327 -9.638809e-02 7.976383e-02
b_cent_IDAQ 3.397100e-03 0.03617522 -6.767533e-02 7.318044e-02
b_cent_IDAQ2 6.582722e-03 0.02542831 -4.243062e-02 5.622728e-02
sigma 9.989711e-01 0.02626822 9.484499e-01 1.052677e+00
lp__ -1.077092e+03 1.40501218 -1.080615e+03 -1.075320e+03
Posterior plots:
pain_all <- bind_rows("all" = as_tibble(as.mcmc(pain, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"), combine_chains = TRUE), .id = "network") %>%
rename(linear = b_cent_IDAQ, quadratic = b_cent_IDAQ2) %>%
pivot_longer(c("linear", "quadratic"),names_to = "varIDAQ") %>%
mutate(network = "pain"))
tom_all %>% bind_rows(., pain_all) %>%
mutate_at(vars(-value),as.factor) %>%
rename(predictor = varIDAQ) %>%
ggplot(., aes(value, fill = predictor)) +
geom_density(alpha = .5)+
geom_vline(xintercept = 0, color = "black", linetype = 2) +
ggtitle(paste("Posterior distributions across the two networks")) +
facet_wrap(~network) +
coord_fixed(ratio = 1/40) +
theme_classic(base_size = 16) +
scale_fill_manual(values=c("#0072B2", "#D55E00"))+
theme(strip.background = element_blank())
Update the models by removing the 16 participants that completed a different version of the IDAQ (1-10). We create a function to do this:
update_model <- function(model_old, region){
model_new <- update(model_old, newdata = DF.roi %>% filter(roi == region & !sub %in% sub_ex), seed = 41)
m1 <- posterior_summary(model_old)[c("b_cent_IDAQ", "b_cent_IDAQ2") , "Estimate"]
m2 <- posterior_summary(model_new)[c("b_cent_IDAQ", "b_cent_IDAQ2") , "Estimate"]
tibble(bias = round(100*((m1-m2)/m2), 2), predictor = c("b_cent_IDAQ", "b_cent_IDAQ2"))
}
Calculate the difference (bias) per region:
We see that excluding these participants changes the two regression coefficients by around 386% and 406%, respectively. This is a large difference, and we thus would not necessarily end up with similar conclusions.
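The bias measure in update_model() is the percentage change of each coefficient, expressed relative to the re-fitted (exclusion) model. A toy base-R illustration with made-up coefficient values (not the actual estimates):

```r
# percentage bias: how much the original estimate (m1) deviates from the
# estimate after exclusion (m2), relative to m2
m1 <- c(b_cent_IDAQ = 0.040, b_cent_IDAQ2 = -0.050)  # hypothetical estimates
m2 <- c(b_cent_IDAQ = 0.010, b_cent_IDAQ2 = -0.010)
round(100 * ((m1 - m2) / m2), 2)  # 300 and 400: large relative shifts
```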
sessionInfo()
R version 3.6.3 (2020-02-29)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.6
Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] brms_2.12.0 Rcpp_1.0.4.6
loaded via a namespace (and not attached):
[1] Brobdingnag_1.2-6 gtools_3.8.2 StanHeaders_2.21.0-1 threejs_0.3.3 shiny_1.4.0.2 assertthat_0.2.1 stats4_3.6.3 yaml_2.2.1 backports_1.1.6
[10] pillar_1.4.3 lattice_0.20-38 glue_1.4.0 digest_0.6.25 promises_1.1.0 colorspace_1.4-1 htmltools_0.4.0 httpuv_1.5.2 Matrix_1.2-18
[19] plyr_1.8.6 dygraphs_1.1.1.6 pkgconfig_2.0.3 rstan_2.19.3 purrr_0.3.4 xtable_1.8-4 mvtnorm_1.1-0 scales_1.1.0 processx_3.4.2
[28] later_1.0.0 tibble_3.0.1 bayesplot_1.7.1 ggplot2_3.3.0 ellipsis_0.3.0 DT_0.13 shinyjs_1.1 cli_2.0.2 magrittr_1.5
[37] crayon_1.3.4 mime_0.9 ps_1.3.2 fansi_0.4.1 nlme_3.1-144 xts_0.12-0 pkgbuild_1.0.7 colourpicker_1.0 prettyunits_1.1.1
[46] rsconnect_0.8.16 tools_3.6.3 loo_2.2.0 lifecycle_0.2.0 matrixStats_0.56.0 stringr_1.4.0 munsell_0.5.0 callr_3.4.3 compiler_3.6.3
[55] rlang_0.4.5 grid_3.6.3 ggridges_0.5.2 rstudioapi_0.11 htmlwidgets_1.5.1 crosstalk_1.1.0.1 igraph_1.2.5 miniUI_0.1.1.1 base64enc_0.1-3
[64] gtable_0.3.0 inline_0.3.15 abind_1.4-5 markdown_1.1 reshape2_1.4.4 R6_2.4.1 gridExtra_2.3 rstantools_2.0.0 zoo_1.8-7
[73] knitr_1.28 bridgesampling_1.0-0 dplyr_0.8.5 fastmap_1.0.1 shinystan_2.5.0 shinythemes_1.1.2 stringi_1.4.6 parallel_3.6.3 vctrs_0.2.4
[82] tidyselect_1.0.0 xfun_0.13 coda_0.19-3
(3. control across all ROIs - pain) (4. control for each pain ROI)
Add seed at the end!
plot(conditional_effects(ltpj),points = TRUE)
posterior_plots(ltpj)
?conditional_effects
test <- as.matrix(rtpj)
library("broom")
library("rstanarm")
library("tidyverse")
fit <- stan_glm(mpg ~ ., data = mtcars)
summary(fit)
posterior <- as.matrix(fit)
plot_title <- ggtitle("Posterior distributions",
"with medians and 80% intervals")
mcmc_areas(posterior,
pars = c("cyl", "drat", "am", "wt"),
prob = 0.8) + plot_title
posterior <- as.matrix(test2)
plot_title <- ggtitle("Posterior distributions",
"with medians and 80% intervals")
mcmc_areas(posterior,
pars = c("b_cent_IDAQS", "b_cent_IDAQ2S"),
prob = 0.8) + plot_title
summary(test2)
color_scheme_set("red")
ppc_dens_overlay(y = fit$y,
yrep = posterior_predict(fit, draws = 50))
color_scheme_set("brightblue")
test %>%
posterior_predict(draws = 500) %>%
ppc_stat_grouped(y = b_cent_IDAQ2S,
stat = "median")
plot(test2)
plot(ltpj)
plot(prec)
plot(vmpfc)
plot(mmpfc)
plot(dmpfc)
pp_check(rtpj, nsamples = 100) #maximal model
library(doBy)
library(brms)
library(easystats)
library(tidyverse)
rope_range(test) # determine the ROPE range
testR <- bayestestR::equivalence_test(test, range = c(-0.1, 0.1), ci = 0.95) # assert the null
testR <- testR[-c(3),]
plot(testR) + theme_abyss() + scale_fill_flat() # plot the decision on H0
describe_posterior(
rtpj,
effects = "all",
component = "all",
test = c("p_direction", "p_significance"),
centrality = "all"
)
library(bayestestR)
posterior <- distribution_gamma(10000, 1.5) # Generate a skewed distribution
centrality <- point_estimate(posterior) # Get indices of centrality
centrality
plot(centrality)
plot(rtpj)
library(rstan)
reg.2 <- brm(formula = cent_contrast ~ (cent_IDAQ + I(cent_IDAQ^2)),
data = DF.test)
print(summary(fit_lin))
me <- marginal_effects(fit_lin, "time") %>%
plot(plot = FALSE) %>%
getElement(1) +
ylim(range(dat_tmp$rating, na.rm = TRUE) + c(-0.5, 0.5))
plot(me)
# quadratic model
fit_quad <- run_model(
brm(
rating ~ (time + I(time^2)) +
Age + Geschlecht + Erstbehandlung + diag +
((time + I(time^2)) | PATNR) + (1 | item),
data = dat_tmp, family = fam,
cores = cores, chains = chains
),
path = paste0("models/fit_hyp1_", meas, "_", fam, "_quad")
)
print(summary(fit_quad))
me <- marginal_effects(fit_quad, "time") %>%
plot(plot = FALSE) %>%
getElement(1) +
ylim(range(dat_tmp$rating, na.rm = TRUE) + c(-0.5, 0.5))
plot(me)
# install.packages("rstanarm") # do not select the version that needs compilation
library(rstan)
library(rstanarm)
library(ggplot2)
library(bayesplot)
# this option uses multiple cores if they're available
options(mc.cores = parallel::detectCores())
DF.test <- DF.roi %>%
filter(roi == "rtpj")
glm_post1 <- stan_glm(cent_contrast~cent_IDAQ, data=DF.test, family=gaussian) #which family?
glm_post1 <- stan_glm(contrast~IDAQ + I(IDAQ^2), data=DF.test, family=gaussian) #does it need to be centered?
stan_trace(glm_post1, pars=c("(Intercept)","IDAQ","sigma"))
summary(glm_post1)
pp_check(glm_post1)
posterior_vs_prior(glm_post1, group_by_parameter = TRUE, pars=c("(Intercept)"))
posterior_vs_prior(glm_post1, group_by_parameter = TRUE, pars=c("cent_IDAQ","sigma"))
linear.model <-lm(DF.test$cent_contrast ~ DF.test$cent_IDAQ +I(DF.test$cent_IDAQ^2))
glm_fit <- glm(cent_contrast~cent_IDAQ +I(cent_IDAQ^2), data=DF.test, family=gaussian)
glm_fit <- glm(cent_contrast~cent_IDAQ, data=DF.test, family=gaussian)
summary(glm_fit)
summary(linear.model)
plot(DF.test$cent_contrast ~ DF.test$cent_IDAQ, pch=16, ylab = "Counts ", cex.lab = 1.3, col = "red")
abline(lm(DF.test$cent_contrast ~ DF.test$cent_IDAQ), col = "blue")
DF.test$cent_IDAQ2 <- DF.test$cent_IDAQ^2
quadratic.model <-lm(DF.test$cent_contrast ~ DF.test$cent_IDAQ + DF.test$cent_IDAQ2)
summary(quadratic.model)
idaqvalues <- seq(-40, 51, .1)
predictedcounts <- predict(quadratic.model,list(IDAQ=idaqvalues, IDAQ2=idaqvalues^2))
plot(DF.test$cent_contrast ~ DF.test$cent_IDAQ, pch=16, ylab = "Counts ", cex.lab = 1.3, col = "red")
lines(idaqvalues, predictedcounts, col = "darkgreen", lwd = 3)
# OLD code: get fitted values
newdata = data.frame(dataset = levels(as.factor(DF.test$dataset)), itemnr = as.factor(DF.test$itemnr))
fit = fitted(mod1, newdata = newdata, re_formula = NA)
colnames(fit) = c('fit', 'se', 'lwr', 'upr')
df_plot = cbind(newdata, fit)
df_plot
obs = aggregate(value ~ dataset, DF.test, mean)
ggplot(df_plot, aes(x = dataset, y = fit)) +
geom_violin(data=DF.test, aes(x=dataset, y=value), alpha=0.5, color="gray70", fill='gray95') +
geom_jitter(data=obs, aes(x=dataset, y=value), alpha=0.3, position = position_jitter(width = 0.07)) +
geom_errorbar(aes(ymin=lwr, ymax=upr), position=position_dodge(), size=1, width=.5) +
geom_point(shape=21, size=4, fill='red') +
xlab("") +
theme_bw () +
theme(panel.grid = element_blank())